

Search for: All records

Creators/Authors contains: "Spiropulu, Maria"


  1. Abstract

    High-dimensional quantum entanglement is a cornerstone of advanced technologies enabling large-scale noise-tolerant quantum systems, fault-tolerant quantum computing, and distributed quantum networks. The recently developed biphoton frequency comb (BFC) provides a powerful platform for high-dimensional quantum information processing in its spectral and temporal quantum modes. Here we propose and generate a singly-filtered high-dimensional BFC via spontaneous parametric down-conversion by spectrally shaping only the signal photons with a Fabry-Pérot cavity. High-dimensional energy-time entanglement is verified through Franson-interference recurrences and temporal correlation measurements with low-jitter detectors. The frequency and temporal entanglement of our singly-filtered BFC is then quantified by Schmidt mode decomposition. Subsequently, we distribute the high-dimensional singly-filtered BFC state over a 10 km fiber link, with a post-distribution time-bin dimension of at least 168. Our demonstrations of high-dimensional entanglement and entanglement distribution show the singly-filtered quantum frequency comb’s capability for high-efficiency quantum information processing and high-capacity quantum networks.
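As a toy illustration of the Schmidt-mode analysis mentioned above: discretize a joint spectral amplitude, take its singular value decomposition, and compute the Schmidt number K = 1/Σλ_k², which estimates the effective entanglement dimension. The Gaussian amplitude, grid, and widths below are hypothetical stand-ins, not the paper's BFC state.

```python
import numpy as np

def schmidt_number(amp):
    """Effective mode number K = 1 / sum(lambda_k^2), where lambda_k are
    the normalized squared singular values of the discretized amplitude."""
    s = np.linalg.svd(amp, compute_uv=False)
    lam = s**2 / np.sum(s**2)        # Schmidt coefficients, sum to 1
    return 1.0 / np.sum(lam**2)

# Hypothetical correlated Gaussian joint spectral amplitude:
# narrow along the sum frequency, broad along the difference frequency
w = np.linspace(-3, 3, 200)
ws, wi = np.meshgrid(w, w)
amp = np.exp(-((ws + wi) ** 2) / 0.1 - ((ws - wi) ** 2) / 8.0)

K = schmidt_number(amp)   # K > 1 indicates frequency entanglement
```

A separable (rank-1) amplitude gives K = 1, while a maximally correlated one gives K equal to the number of modes, which is why K serves as a dimensionality witness.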

  2. Abstract

    We present an application of anomaly detection techniques based on deep recurrent autoencoders (AEs) to the problem of detecting gravitational wave (GW) signals in laser interferometers. Trained on noise data alone, this class of algorithms can detect signals using an unsupervised strategy, i.e. without targeting a specific kind of source. We develop a custom architecture to analyze the data from two interferometers and compare its performance to that of other AE architectures and of a convolutional classifier. The unsupervised nature of the proposed strategy comes at a cost in accuracy compared to more traditional supervised techniques; on the other hand, there is a qualitative gain in generalizing the experimental sensitivity beyond the ensemble of pre-computed signal templates. The recurrent AE outperforms the other AEs based on different architectures. The class of recurrent AEs presented in this paper could complement the search strategy employed for GW detection and extend the discovery reach of ongoing detection campaigns.
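The detection logic described above — train on noise only, then flag segments that reconstruct poorly — can be sketched as follows. The clip-based "reconstructor" is a deliberately crude stand-in for a trained recurrent AE (it can only reproduce amplitudes seen in training noise, so loud transients reconstruct badly), and the quantile-based threshold is one common way to fix the false-alarm rate; none of this is the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def anomaly_scores(segments, reconstruct):
    """Reconstruction error per segment; large error = anomaly candidate."""
    return np.array([np.mean((s - reconstruct(s)) ** 2) for s in segments])

def make_reconstructor(train_noise):
    # Crude stand-in for an AE trained on noise: it can only output
    # amplitudes within the range seen during training.
    lo = min(s.min() for s in train_noise)
    hi = max(s.max() for s in train_noise)
    return lambda s: np.clip(s, lo, hi)

train = [rng.normal(0, 1, 128) for _ in range(200)]
heldout = [rng.normal(0, 1, 128) for _ in range(200)]
recon = make_reconstructor(train)

# Threshold set on noise only: 99th percentile of held-out noise scores
thr = np.quantile(anomaly_scores(heldout, recon), 0.99)

# Inject a loud chirp-like transient (hypothetical signal, no template used)
t = np.linspace(0.0, 1.0, 128)
signal = rng.normal(0, 1, 128) + 8.0 * np.sin(2 * np.pi * (20 + 30 * t) * t)
score = anomaly_scores([signal], recon)[0]   # score > thr -> flagged
```

The key property illustrated is that the detector never sees a signal during training: anything the noise-trained model cannot reproduce is flagged, which is what trades some accuracy for template-free generality.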

  3. Abstract

    In general-purpose particle detectors, the particle-flow algorithm may be used to reconstruct a comprehensive particle-level view of the event by combining information from the calorimeters and the trackers, significantly improving the detector resolution for jets and the missing transverse momentum. In view of the planned high-luminosity upgrade of the CERN Large Hadron Collider (LHC), it is necessary to revisit existing reconstruction algorithms and ensure that both the physics and computational performance are sufficient in an environment with many simultaneous proton–proton interactions (pileup). Machine learning may offer a prospect for computationally efficient event reconstruction that is well-suited to heterogeneous computing platforms, while significantly improving the reconstruction quality over rule-based algorithms for granular detectors. We introduce MLPF, a novel, end-to-end trainable, machine-learned particle-flow algorithm based on a parallelizable, computationally efficient, and scalable graph neural network, optimized using a multi-task objective on simulated events. We report the physics and computational performance of the MLPF algorithm on a Monte Carlo dataset of top quark–antiquark pairs produced in proton–proton collisions in conditions similar to those expected for the high-luminosity LHC. The MLPF algorithm improves the physics response with respect to a rule-based benchmark algorithm and demonstrates computationally scalable particle-flow reconstruction in a high-pileup environment.
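A minimal sketch of the kind of graph neural network operation involved: mean-aggregation message passing over detector elements (tracks and calorimeter clusters as nodes), followed by a per-node prediction head. The sizes, random weights, and two-layer depth are illustrative assumptions, not the actual MLPF architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def message_pass(h, adj, w_self, w_nbr):
    """One message-passing step: each node (track or cluster) updates its
    features from itself and the mean of its graph neighbors."""
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    nbr = (adj @ h) / deg                              # mean neighbor features
    return np.maximum(h @ w_self + nbr @ w_nbr, 0.0)   # linear + ReLU

n, d = 6, 8                        # 6 detector elements, 8 features each
h = rng.normal(size=(n, d))
adj = (rng.random((n, n)) < 0.4).astype(float)
adj = np.maximum(adj, adj.T)       # undirected element-to-element graph
np.fill_diagonal(adj, 0.0)

w1, w2 = rng.normal(size=(d, d)), rng.normal(size=(d, d))
h = message_pass(h, adj, w1, w2)   # two rounds let information propagate
h = message_pass(h, adj, w1, w2)

w_head = rng.normal(size=(d, 5))   # e.g. 5 particle-class logits per node
logits = h @ w_head                # per-element particle-flow prediction
```

The per-node output head is what makes this structure amenable to a multi-task objective (classification plus kinematic regression per candidate), and every step above is a dense matrix product, which is why such models parallelize well on accelerators.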

  4. Abstract The Exa.TrkX project has applied geometric learning concepts such as metric learning and graph neural networks to HEP particle tracking. Exa.TrkX’s tracking pipeline groups detector measurements to form track candidates and filters them. The pipeline, originally developed using the TrackML dataset (a simulation of an LHC-inspired tracking detector), has been demonstrated on other detectors, including DUNE Liquid Argon TPC and CMS High-Granularity Calorimeter. This paper documents new developments needed to study the physics and computing performance of the Exa.TrkX pipeline on the full TrackML dataset, a first step towards validating the pipeline using ATLAS and CMS data. The pipeline achieves tracking efficiency and purity similar to production tracking algorithms. Crucially for future HEP applications, the pipeline benefits significantly from GPU acceleration, and its computational requirements scale close to linearly with the number of particles in the event. 
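The grouping stage of such a pipeline can be illustrated in toy form: embed hits so that hits from the same particle land close together in the learned metric, then connect pairs within a radius to form track candidates for a downstream filter. The embeddings below are hand-made stand-ins for a trained embedding network.

```python
import numpy as np

def build_edges(emb, r):
    """Candidate edges between hits whose embedding distance is below r."""
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    upper = np.triu(np.ones(d.shape, dtype=bool), k=1)  # each pair once
    i, j = np.where((d < r) & upper)
    return list(zip(i.tolist(), j.tolist()))

# Hand-made embeddings: hits 0-2 from one particle, hits 3-4 from another
emb = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                [5.0, 5.0], [5.1, 5.0]])
edges = build_edges(emb, r=0.5)
# -> [(0, 1), (0, 2), (1, 2), (3, 4)]: track candidates emerge as
#    connected components, which a filter stage would then refine
```

The quadratic all-pairs distance here is only for clarity; at realistic hit multiplicities one would use a spatial index (e.g. a k-d tree) so the edge building scales near-linearly, in line with the scaling behavior the abstract reports.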
  5. Abstract

    Quantum transduction, the process of converting quantum signals from one form of energy to another, is an important area of quantum science and technology. The present perspective article reviews quantum transduction between microwave and optical photons, an area that has recently seen a lot of activity and progress because of its relevance for connecting superconducting quantum processors over long distances, among other applications. Our review covers the leading approaches to achieving such transduction, with an emphasis on those based on atomic ensembles, opto-electro-mechanics, and electro-optics. We briefly discuss relevant metrics from the point of view of different applications, as well as challenges for the future.

  7. Abstract Many measurements at the LHC require efficient identification of heavy-flavour jets, i.e. jets originating from bottom (b) or charm (c) quarks. An overview of the algorithms used to identify c jets is presented and a novel method to calibrate them is introduced. This new method adjusts the entire distributions of the outputs obtained when the algorithms are applied to jets of different flavours. It is based on an iterative approach exploiting three distinct control regions that are enriched with either b jets, c jets, or light-flavour and gluon jets. Results are presented in the form of correction factors evaluated using proton-proton collision data with an integrated luminosity of 41.5 fb⁻¹ at √s = 13 TeV, collected by the CMS experiment in 2017. The closure of the method is tested by applying the measured correction factors to simulated data sets and checking the agreement between the adjusted simulation and collision data. Furthermore, a validation is performed by testing the method on pseudodata, which emulate various mismodelling conditions. The calibrated results enable the use of the full distributions of heavy-flavour identification algorithm outputs, e.g. as inputs to machine-learning models. Thus, they are expected to increase the sensitivity of future physics analyses.
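One step of the distribution-level calibration idea can be sketched as histogram reweighting: per-bin data/simulation scale factors applied as event weights, after which the adjusted simulation reproduces the data histogram by construction (the closure check). The beta-distributed toy discriminator outputs below are assumptions, and the real method iterates this kind of adjustment across the three flavour-enriched control regions.

```python
import numpy as np

def correction_factors(data, sim, bins):
    """Per-bin data/simulation ratios for a discriminator distribution."""
    nd, _ = np.histogram(data, bins=bins)
    ns, _ = np.histogram(sim, bins=bins)
    # Leave empty simulation bins at weight 1 to avoid division by zero
    return np.divide(nd, ns, out=np.ones(len(nd)), where=ns > 0)

rng = np.random.default_rng(1)
bins = np.linspace(0.0, 1.0, 11)        # discriminator output in [0, 1]
data = rng.beta(2.0, 2.0, 5000)         # toy "data" outputs
sim = rng.beta(2.5, 1.8, 5000)          # toy mismodelled simulation

sf = correction_factors(data, sim, bins)
idx = np.clip(np.digitize(sim, bins) - 1, 0, len(bins) - 2)
weights = sf[idx]                        # per-event weight for the simulation
# Closure: the reweighted simulation matches the data histogram bin by bin
```

Because the correction is a weight on the full output distribution rather than a cut-efficiency scale factor, the calibrated discriminator shape can be fed directly into downstream machine-learning models, which is the use case the abstract highlights.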